We present a study of how inducing model sparsity can help achieve compositional generalization and better sample efficiency in grounded language learning problems. We consider simple language-conditioned navigation problems in a grid-world environment with disentangled observations. We show that standard neural architectures do not always yield compositional generalization. To address this, we design an agent that contains a goal-identification module, which encourages sparse correlations between words in the instruction and attributes of objects, and composes them to find the goal. The output of the goal-identification module is the input to a value iteration network planner. Our agent maintains high performance on goals containing novel combinations of attributes, even when learning from only a few demonstrations. We examine the agent's internal representations and find the correct correspondences between words in its dictionary and attributes in the environment.
We show that deep learning models with built-in relational inductive bias can bring benefits to sample-efficient learning without relying on extensive data augmentation. The proposed one-shot classification model performs relational matching of a pair of inputs in the form of local and pairwise attention. Our approach solves the one-shot image classification Omniglot challenge perfectly. Our model exceeds human-level accuracy, as well as the previous state of the art, with no data augmentation.
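The core idea of relational matching between a pair of inputs can be sketched as follows. This is an illustrative construction, not the paper's model: the function names and the cosine-similarity relation score are assumptions, and a trained network would use learned projections rather than raw features.

```python
import numpy as np

def pairwise_relation_score(query_feats, support_feats):
    """Minimal sketch of relational matching between two images.

    query_feats, support_feats: (N, D) arrays of N local feature vectors
    (e.g. spatial cells of a conv feature map), D-dimensional each.
    Returns a scalar relation score: higher means more similar.
    """
    # Pairwise attention: similarity of every query cell to every support cell.
    logits = query_feats @ support_feats.T               # (N, N)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)              # softmax over support cells
    # Each query cell attends to the support cells it matches best.
    matched = attn @ support_feats                       # (N, D)
    # Relation score: mean cosine similarity of query cells to their matches.
    num = (query_feats * matched).sum(axis=1)
    den = np.linalg.norm(query_feats, axis=1) * np.linalg.norm(matched, axis=1)
    return float(np.mean(num / (den + 1e-8)))
```

A one-shot classifier built on this would assign a query image to the support class with the highest relation score.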
We leverage deep sequential models to tackle the problem of predicting healthcare utilization for patients, which could help governments better allocate resources for future healthcare use. Specifically, we study the problem of \textit{divergent subgroups}, wherein the outcome distribution in a smaller subset of the population deviates substantially from that of the general population. The traditional approach of building specialized models for divergent subgroups can be problematic if the subgroup size is very small (e.g., rare diseases). To address this challenge, we first develop a novel attention-free sequential model, SANSformers, instilled with inductive biases suited for modeling clinical codes in electronic health records. We then design a specific self-supervised objective and demonstrate its effectiveness, particularly in scarce-data settings, by pre-training each model on the entire health registry (with close to one million patients) before fine-tuning on downstream tasks for the divergent subgroups. We compare the novel SANSformer architecture with LSTM and Transformer models using two data sources and a multi-task learning objective that aids healthcare utilization prediction. Empirically, the attention-free SANSformer models perform consistently across experiments, outperforming the baselines in most cases by at least $\sim 10$\%. Furthermore, the self-supervised pre-training boosts performance throughout, e.g., by over $\sim 50$\% (and as high as $800$\%) when predicting the number of hospital visits.
In many control problems that include vision, optimal controls can be inferred from the locations of the objects in the scene. This information can be represented using feature points, a list of spatial locations in the learned feature maps of an input image. Previous works have shown that feature points learned using unsupervised pre-training or human supervision can provide good features for control tasks. In this paper, we show that it is possible to learn efficient feature point representations end-to-end, without unsupervised pre-training, decoders, or additional losses. Our proposed architecture consists of a differentiable feature point extractor that feeds the coordinates of the estimated feature points directly to a soft actor-critic agent. The proposed algorithm is competitive with the state of the art on DeepMind Control Suite tasks.
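A common way to extract differentiable feature point coordinates from a feature map is a spatial soft-argmax, sketched below. This is a generic construction under stated assumptions; the paper's extractor may differ in detail, and the function name and temperature parameter are illustrative.

```python
import numpy as np

def spatial_soft_argmax(feature_map, temperature=1.0):
    """Expected (x, y) coordinates per channel via a spatial softmax.

    feature_map: (C, H, W) array, one channel per feature point.
    Returns a (C, 2) array of (x, y) coordinates in [-1, 1].
    """
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, -1) / temperature
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)        # per-channel softmax
    probs = probs.reshape(c, h, w)
    xs = np.linspace(-1.0, 1.0, w)                   # normalized pixel grid
    ys = np.linspace(-1.0, 1.0, h)
    x = (probs.sum(axis=1) * xs).sum(axis=1)         # expected x per channel
    y = (probs.sum(axis=2) * ys).sum(axis=1)         # expected y per channel
    return np.stack([x, y], axis=1)
```

Because the coordinates are expectations under a softmax, gradients flow through them, which is what allows the extractor to be trained end-to-end from the actor-critic loss alone.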
In the past years, deep learning has seen increased usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide Images under domain shift, using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation, as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration, and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, rejecting the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution and out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
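The two mechanisms at the heart of this evaluation, averaging predictions over test-time augmentations and rejecting low-confidence inputs, can be sketched generically. This is not the paper's code: the function names, the toy model, and the max-probability confidence measure are illustrative assumptions.

```python
import numpy as np

def tta_predict(model, image, augmentations):
    """Test-time data augmentation: average class probabilities over
    augmented copies of the input."""
    probs = np.stack([model(aug(image)) for aug in augmentations])
    return probs.mean(axis=0)

def reject_uncertain(prob_batch, threshold):
    """Keep predictions whose max class probability exceeds the threshold;
    flag the rest for rejection / expert review.

    prob_batch: (B, K) array of class probabilities.
    Returns (predicted classes, boolean keep mask).
    """
    conf = prob_batch.max(axis=1)
    keep = conf >= threshold
    return prob_batch.argmax(axis=1), keep
```

Reporting accuracy only on the kept tiles, as a function of the rejection threshold, is the standard way to quantify how well an uncertainty estimate separates correct from incorrect predictions.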
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Moreover, automatic measurement also appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
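The key building block, mapping landmarks to latent points through affine combinations, can be sketched as follows. This is an illustrative sketch of the core property, not the paper's trained autoencoder: the function name and the random weights are assumptions.

```python
import numpy as np

def affine_combine(weights, points):
    """Map J landmarks to L latent points via affine combinations.

    weights: (L, J) array whose rows each sum to 1.
    points:  (J, 3) array of 3D landmark positions.

    Because the rows sum to 1, the mapping commutes with translation:
    shifting all landmarks by t shifts every latent point by t, so the
    latent points behave like real 3D points rather than abstract codes.
    """
    assert np.allclose(weights.sum(axis=1), 1.0)
    return weights @ points
```

In the ACAE setting, one such affine map encodes each skeleton's landmarks into shared latent points and another decodes them back, and the consistency of the decoded skeletons across datasets is what regularizes the pose estimator.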
This article concerns Bayesian inference using deep linear networks with output dimension one. In the interpolating (zero noise) regime we show that with Gaussian weight priors and MSE negative log-likelihood loss both the predictive posterior and the Bayesian model evidence can be written in closed form in terms of a class of meromorphic special functions called Meijer-G functions. These results are non-asymptotic and hold for any training dataset, network depth, and hidden layer widths, giving exact solutions to Bayesian interpolation using a deep Gaussian process with a Euclidean covariance at each layer. Through novel asymptotic expansions of Meijer-G functions, a rich new picture of the role of depth emerges. Specifically, we find that the posteriors in deep linear networks with data-independent priors are the same as in shallow networks with evidence maximizing data-dependent priors. In this sense, deep linear networks make provably optimal predictions. We also prove that, starting from data-agnostic priors, Bayesian model evidence in wide networks is only maximized at infinite depth. This gives a principled reason to prefer deeper networks (at least in the linear case). Finally, our results show that with data-agnostic priors a novel notion of effective depth given by \[\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}}\] determines the Bayesian posterior in wide linear networks, giving rigorous new scaling laws for generalization error.
In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. The existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log 1/\epsilon)$ of computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are condition numbers with respect to variable blocks $x$ and $y$. We propose a new algorithm that only requires $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ of computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log 1/\epsilon)$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and computation of $\nabla_y f(x,y)$ is significantly cheaper than computation of $\nabla_x f(x,y)$. In this case, our algorithm substantially outperforms the existing state-of-the-art methods.
This paper presents a solution to the GenChal 2022 shared task dedicated to feedback comment generation for writing learning. In this task, given a text with an error and the span of the error, a system generates an explanatory note that helps the writer (a language learner) improve their writing skills. Our solution is based on fine-tuning the T5 model on the initial dataset, augmented according to the syntactic dependencies of the words located within the indicated error span. The solution of our team "nigula" obtained second place according to manual evaluation by the organizers.